Gatsby Computational Neuroscience Unit

David Balduzzi
Wednesday 28th March 2018

Time: 4.00pm

Ground Floor Seminar Room
25 Howland Street, London, W1T 4JG

Gradients in Games

Algorithms that optimize multiple objective functions have proliferated recently, including generative adversarial networks (GANs), synthetic gradients, intrinsic curiosity, and others. More generally, there is a shift away from end-to-end learning on a single loss towards modular architectures composed of sub-goals and sub-losses. However, very little is understood about these settings: there is no longer a single loss landscape, and gradient descent does not necessarily descend. In this talk, I will discuss the general setting, recent work on the geometry of interacting losses, and the implications for learning.
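The claim that gradient descent need not descend when several losses interact can be illustrated with a standard toy example (my sketch, not taken from the talk): simultaneous gradient descent on the bilinear zero-sum game min over x, max over y of f(x, y) = x·y. The unique equilibrium is (0, 0), yet each simultaneous step multiplies the squared distance from it by (1 + lr²), so the iterates spiral outward instead of converging.

```python
# Toy example: simultaneous gradient descent on min_x max_y f(x, y) = x * y.
# The equilibrium is (0, 0), but the coupled dynamics move away from it.

def simultaneous_gd(x, y, lr=0.1, steps=100):
    """Run simultaneous gradient steps: x descends f, y ascends f."""
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # both players update at once
    return x, y

x0, y0 = 1.0, 1.0
x, y = simultaneous_gd(x0, y0)
print(x**2 + y**2 > x0**2 + y0**2)  # True: iterates spiral outward
```

A short calculation shows why: with x' = x − lr·y and y' = y + lr·x, the cross terms cancel and x'² + y'² = (1 + lr²)(x² + y²), so the distance to the equilibrium strictly increases at every step even though each player follows its own gradient exactly.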